
Beyond Face Rotation: Global and Local Perception GAN for Photorealistic and Identity Preserving Frontal View Synthesis

Abstract

Photorealistic frontal view synthesis from a single face image has a wide range of applications in the field of face recognition. Although data-driven deep learning methods have been proposed to address this problem by seeking solutions from ample face data, the problem remains challenging because it is intrinsically ill-posed. This paper proposes a Two-Pathway Generative Adversarial Network (TP-GAN) for photorealistic frontal view synthesis that simultaneously perceives global structures and local details. Four landmark-located patch networks are proposed to attend to local textures in addition to the commonly used global encoder-decoder network. Beyond the novel architecture, we make this ill-posed problem well constrained by introducing a combination of adversarial loss, symmetry loss, and identity-preserving loss. The combined loss function leverages both the frontal face distribution and pre-trained discriminative deep face models to guide an identity-preserving inference of frontal views from profiles. Different from previous deep learning methods that mainly rely on intermediate features for recognition, our method directly leverages the synthesized identity-preserving image for downstream tasks such as face recognition and attribute estimation. Experimental results demonstrate that our method not only presents compelling perceptual results but also outperforms state-of-the-art results on large-pose face recognition.
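To make the two-pathway idea concrete, below is a minimal, illustrative sketch of such a generator in PyTorch. The layer sizes, the 40x40 patch resolution, and the paste-back fusion scheme are assumptions for illustration only, not the authors' exact architecture; the point is simply that one global encoder-decoder and four landmark-centered local pathways produce feature maps that are fused into a single frontal image.

```python
# Illustrative two-pathway generator sketch (not the paper's exact network).
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """3x3 conv -> batch norm -> ReLU, stride 2 for downsampling."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, stride=2, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


def deconv_block(in_ch, out_ch):
    """4x4 transposed conv -> batch norm -> ReLU, stride 2 for upsampling."""
    return nn.Sequential(
        nn.ConvTranspose2d(in_ch, out_ch, 4, stride=2, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class GlobalPathway(nn.Module):
    """Global encoder-decoder that captures the overall face structure."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(conv_block(3, 64), conv_block(64, 128),
                                     conv_block(128, 256))
        self.decoder = nn.Sequential(deconv_block(256, 128), deconv_block(128, 64),
                                     deconv_block(64, 32))

    def forward(self, x):                            # x: (B, 3, 128, 128)
        return self.decoder(self.encoder(x))         # (B, 32, 128, 128)


class LocalPathway(nn.Module):
    """Small encoder-decoder for one landmark-centered patch (eye, nose, mouth)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(conv_block(3, 32), conv_block(32, 64))
        self.decoder = nn.Sequential(deconv_block(64, 32), deconv_block(32, 16))

    def forward(self, patch):                        # patch: (B, 3, 40, 40)
        return self.decoder(self.encoder(patch))     # (B, 16, 40, 40)


class TwoPathwayGenerator(nn.Module):
    """Fuses the global feature map with four local patch feature maps."""
    def __init__(self, num_patches=4):
        super().__init__()
        self.global_path = GlobalPathway()
        self.local_paths = nn.ModuleList([LocalPathway() for _ in range(num_patches)])
        # Final layer maps the fused features to an RGB frontal image.
        self.fuse = nn.Conv2d(32 + 16 * num_patches, 3, kernel_size=3, padding=1)

    def forward(self, face, patches, patch_boxes):
        """face: (B, 3, 128, 128); patches: list of (B, 3, 40, 40) crops;
        patch_boxes: list of (top, left) positions to paste each patch back."""
        g = self.global_path(face)
        b, _, h, w = g.shape
        local_maps = []
        for path, patch, (top, left) in zip(self.local_paths, patches, patch_boxes):
            feat = path(patch)                                   # (B, 16, 40, 40)
            canvas = g.new_zeros(b, feat.size(1), h, w)          # empty full-size map
            canvas[:, :, top:top + 40, left:left + 40] = feat    # paste patch features
            local_maps.append(canvas)
        fused = torch.cat([g] + local_maps, dim=1)
        return torch.tanh(self.fuse(fused))                      # synthesized frontal view
```

In this sketch, a forward pass takes a 128x128 profile face, four cropped landmark patches, and their locations, and returns a 128x128 synthesized frontal view.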
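The abstract constrains the ill-posed synthesis task with a combination of losses. A hedged sketch of such a combined objective follows; the pixel-wise reconstruction term and the weights λ1-λ3 are assumptions, and the paper's exact formulation may differ:

L_total = L_pixel + λ1 · L_sym + λ2 · L_adv + λ3 · L_ip

Here L_sym penalizes differences between the left and right halves of the synthesized frontal face, L_adv is the adversarial loss from a discriminator trained on frontal-face distributions, and L_ip measures the feature distance, in a pre-trained discriminative face model, between the synthesized image and the ground-truth frontal view.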
